
    Tumor Segmentation and Classification Using Machine Learning Approaches

    Medical image processing has advanced rapidly in both methodology and application, increasing its usefulness in health-care management. Modern medical image processing employs various methods to diagnose tumors, driven by burgeoning demand in the field. This study uses PG-DBCWMF, the HV region method, and CTSIFT extraction to identify brain and pancreatic tumors. In terms of efficiency, precision, creativity, and other factors, these strategies offer improved performance in therapeutic settings. The proposed method combines the three techniques: PG-DBCWMF, the HV region algorithm, and CTSIFT extraction. PG-DBCWMF (Patch Group Decision Couple Window Median Filter) performs well in the preprocessing stage and eliminates noise. The HV region technique precisely calculates the vertical and horizontal angles of the input images. CTSIFT is a feature extraction method that recognizes the affected region of tumor images. The experimental evaluation used brain tumor and pancreatic tumor databases, which yielded the best PSNR, MSE, and other results.
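    The paper's PG-DBCWMF filter is specialized and not described here, but the role it plays in the preprocessing stage can be illustrated with a plain median filter, which removes impulse ("salt-and-pepper") noise by replacing each pixel with the median of its neighborhood. This is a minimal sketch of that general idea only, not the authors' filter:

    ```python
    # Illustrative sketch: a plain k x k median filter on a 2D list of pixels.
    # PG-DBCWMF is a more elaborate variant; this shows only the basic principle.
    def median_filter(image, k=3):
        h, w = len(image), len(image[0])
        r = k // 2
        out = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                # gather the neighborhood, clipped at the image borders
                window = [
                    image[ii][jj]
                    for ii in range(max(0, i - r), min(h, i + r + 1))
                    for jj in range(max(0, j - r), min(w, j + r + 1))
                ]
                window.sort()
                out[i][j] = window[len(window) // 2]
        return out

    # An isolated impulse-noise pixel is replaced by its neighborhood median:
    noisy = [
        [10, 10, 10],
        [10, 255, 10],  # noise spike
        [10, 10, 10],
    ]
    print(median_filter(noisy)[1][1])  # -> 10
    ```
    
    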

    Assessment of Information Technology Use Competence for Teachers: Identifying and Applying the Information Technology Competence Framework in Online Teaching

    This paper proposes a theoretical framework as a foundation for building an information technology competence framework and the requirements for teachers' use of information technology in online teaching at training institutions. A survey was conducted on a sample of n = 342, together with 42 expert opinions, to identify an information technology competence framework with the criteria and skill sets necessary to organize online teaching successfully. The paper discusses developing information technology competence to change mindsets and build teachers' competency to meet today's digitalized online teaching trend. Building an information technology competence framework for online teaching thus contributes to the training process and to improving students' learning capacity.

    DEVELOPING THE INFORMATION TECHNOLOGY APPLICATION COMPETENCE OF TEACHERS IN ONLINE TEACHING

    Developing the competence to use information technology in teaching is one of the important occupational competencies for teachers in the digital age. Developing information technology application competence has many implications for the training process, in accordance with the actual conditions of education in Vietnam and with general world trends. This paper assesses teachers' needs in using information technology for online teaching and proposes a process for identifying the structure of information technology competencies and the capacity-development requirements for using information technology in online teaching at training institutions. The paper presents empirical results addressing the need to develop information technology application competence in online teaching, necessary for successfully organizing online teaching with a variety of theoretical and practical pedagogies in educational technology.

    Consideration of Data Security and Privacy Using Machine Learning Techniques

    As artificial intelligence becomes more prevalent, machine learning algorithms are being used in a widening range of domains. Big data and processing power, typically gathered via crowdsourcing and acquired online, are essential to the effectiveness of machine learning. Sensitive and private data, such as ID numbers, personal mobile phone numbers, and medical records, are frequently included in the data acquired for machine learning training. How to protect sensitive private data effectively and cheaply is a significant issue. With this in mind, this article first discusses the privacy dilemma in machine learning and how it might be exploited, then summarizes the features of and techniques for protecting privacy in machine learning algorithms. Next, a convolutional neural network combined with a different secure privacy approach is proposed to improve the classification accuracy of algorithms that employ noise to safeguard privacy. This approach can acquire the privacy budget of each layer of a neural network and fully incorporates the properties of the Gaussian distribution and differential privacy. Lastly, the Gaussian noise scale is set, and sensitive information in the data is preserved using the gradient values of a stochastic gradient descent technique. The experimental results showed that a balance between accessibility and privacy protection of the training data set, with an accuracy of 99.05%, could be achieved by modifying the deep differential privacy model's parameters according to variations in private information in the data.
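    The noise mechanism described above follows the general differentially private SGD pattern: clip each per-example gradient to bound its sensitivity, average, then add Gaussian noise before the parameter update. The sketch below shows only that generic pattern with scalar gradients; the `sigma` value and per-layer privacy-budget accounting in the paper are not reproduced, so the parameters here are stand-in assumptions:

    ```python
    # Hedged sketch of a DP-SGD-style update step (scalar gradients for brevity).
    # clip_norm bounds each example's contribution; sigma scales the Gaussian noise.
    import random

    def dp_sgd_step(grads, clip_norm=1.0, sigma=0.5, lr=0.1, seed=0):
        rng = random.Random(seed)
        clipped = []
        for g in grads:
            norm = abs(g)
            # clip each per-example gradient to at most clip_norm in magnitude
            clipped.append(g * min(1.0, clip_norm / norm) if norm > 0 else g)
        avg = sum(clipped) / len(clipped)
        # Gaussian noise calibrated to the clipping bound (sensitivity)
        noise = rng.gauss(0.0, sigma * clip_norm / len(grads))
        return -lr * (avg + noise)  # the parameter update

    # With sigma=0 the step reduces to plain clipped SGD:
    # grads [2.0, -0.5] clip to [1.0, -0.5], average 0.25, update -0.025
    print(dp_sgd_step([2.0, -0.5], clip_norm=1.0, sigma=0.0, lr=0.1))
    ```
    
    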

    A Recent Connected Vehicle - IoT Automotive Application Based on Communication Technology

    Realizing the full potential of vehicle communications depends in large part on the infrastructure of vehicular networks. As more cars are connected to the Internet and to one another, new technological advancements are being driven by a multidisciplinary approach. As transportation networks become more complicated, academic and automotive researchers collaborate to offer their ideas and solutions. They also envision various applications to enhance mobility and the driving experience. Due to the requirements for low latency, higher throughput, and increased reliability, wireless access technologies and an appropriate (potentially dedicated) infrastructure present substantial challenges to communication systems. This article provides a comprehensive overview of the wireless access technologies, deployment, and connected-car infrastructures that enable vehicular connectivity. The challenges, issues, services, and maintenance of connected vehicles that rely on infrastructure-based vehicular communications are also identified in this paper.

    EVALUATION OF PRESCRIBING INDICATORS FOR PAEDIATRIC OUTPATIENTS UNDER SIX YEARS OLD IN DISTRICT HOSPITALS OF CAN THO CITY IN THE PERIOD OF 2015-2016

    Objective: To examine and compare the primary and supplementary prescribing indicators for paediatric outpatients under six years old. Methods: We performed a comparative cross-sectional study over nine months, beginning September 2015. 800 prescriptions for paediatric patients under 6 years old were collected at 8 district hospitals in Can Tho city to evaluate the primary and supplementary prescribing indicators. The sample was collected prospectively by systematic selection, with an interval of 5 between patients. The data were analysed and compared against the standard drug-use indicators for developing countries recommended by the WHO. Results: Average number of drugs per encounter: 4.1; percentage of drugs prescribed by generic name: 94.2%; percentage of encounters with an antibiotic prescribed: 85.8%; percentage of drugs prescribed from the Ministry of Health's essential drugs list: 78.7%; percentage of encounters with a corticoid prescribed: 41.7%; percentage of encounters with a vitamin prescribed: 13.1%; average drug cost per encounter: 37.5 thousand VND; percentage of drug costs spent on antibiotics: 55.2%; on essential drugs: 75.7%; on corticoids: 1.9%; on vitamins: 1.4%. Conclusion: The results of this research have identified some issues in outpatient prescribing, which may lead to intervention studies evaluating changes in these issues in the outpatient clinic.
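    The core WHO prescribing indicators reported above reduce to simple counts and averages over the sampled encounters. The sketch below shows that arithmetic on a few made-up encounter records (the data is purely illustrative, not from the study):

    ```python
    # Illustrative computation of three WHO prescribing indicators.
    # The encounter records below are fabricated for demonstration only.
    encounters = [
        {"drugs": 4, "antibiotic": True,  "cost": 30.0},
        {"drugs": 5, "antibiotic": True,  "cost": 45.0},
        {"drugs": 3, "antibiotic": False, "cost": 25.0},
    ]
    n = len(encounters)
    avg_drugs = sum(e["drugs"] for e in encounters) / n          # drugs per encounter
    pct_antibiotic = 100.0 * sum(e["antibiotic"] for e in encounters) / n
    avg_cost = sum(e["cost"] for e in encounters) / n            # cost per encounter
    print(avg_drugs, round(pct_antibiotic, 1), round(avg_cost, 1))
    ```
    
    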

    ViCGCN: Graph Convolutional Network with Contextualized Language Models for Social Media Mining in Vietnamese

    Social media processing is a fundamental task in natural language processing with numerous applications. As Vietnamese social media and information science have grown rapidly, information-based mining of Vietnamese social media has become crucial. However, state-of-the-art research faces several significant drawbacks, including imbalanced and noisy data on social media platforms. Imbalance and noise are two essential issues that need to be addressed in Vietnamese social media texts. Graph Convolutional Networks can address these problems in text classification on social media by taking advantage of the graph structure of the data. This study presents a novel approach based on a contextualized language model (PhoBERT) and a graph-based method (Graph Convolutional Networks). In particular, the proposed approach, ViCGCN, jointly trains contextualized embeddings with Graph Convolutional Networks (GCN) to capture more syntactic and semantic dependencies and thereby address those drawbacks. Extensive experiments on various Vietnamese benchmark datasets were conducted to verify our approach. The results show that applying GCN to BERTology models as the final layer significantly improves performance. Moreover, the experiments demonstrate that ViCGCN outperforms 13 powerful baseline models, including BERTology models, fused BERTology and GCN models, other baselines, and the SOTA, on three benchmark social media datasets. Our proposed ViCGCN approach demonstrates a significant improvement of up to 6.21%, 4.61%, and 2.63% over the best contextualized language models, including multilingual and monolingual ones, on the three benchmark datasets UIT-VSMEC, UIT-ViCTSD, and UIT-VSFC, respectively. Additionally, our integrated model ViCGCN achieves the best performance compared with other BERTology models integrated with GCN.

    A Bibliometrics study on homework from 1977 to 2020

    Homework effectiveness has long been a controversial issue for many educators, schools, and parents. Many researchers have tried to prove that homework is not just a nuisance we all have to face over the years but can really build character and be good for students, as teachers and parents typically say. This bibliometric review studied 429 documents related to homework in education from the Clarivate Web of Science from 1977 to 2020. The study aims to record the volume and growth pattern of the homework literature and to identify critical authors, publications, and topics of this knowledge base. The review found that the homework literature has grown remarkably over the past 43 years, with the most-cited authors coming from the US, Germany, and Portugal. Using co-citation, co-occurrence, and bibliographic coupling analyses in the VOSviewer program, this research also produced significant results and suggestions for future research. The findings showed that the homework literature increased gradually in volume, with most publications originating in the USA, and that critical authors were Cooper, Xu, and Trautwein. Additionally, five common research topics were identified: the effect of homework and its measurement, the homework environment, homework tasks and feedback, family involvement in homework, and time and effort.

    Probabilistic Schema Covering

    Schema covering is the process of representing large and complex schemas by easily comprehensible common objects. This is done by identifying a set of common concepts from a repository, called the concept repository, and generating a cover that describes the schema in terms of those concepts. The traditional schema covering approach has two shortcomings: it does not model the uncertainty in the covering process, and it requires the user to state an ambiguity constraint that is hard to define. We remedy this by incorporating a probabilistic model into schema covering to generate a probabilistic schema cover. The integrated probabilities not only enhance the coverage of cover results but also eliminate the need to define the ambiguity parameter. Both probabilistic and traditional schema covering run on top of a concept repository. Experiments on real datasets show the competitive performance of our approach.
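    The covering idea can be sketched as a greedy weighted set cover: pick concepts from the repository whose attributes cover the schema, weighting each candidate by its match probability. The scoring rule below (coverage count times probability) is an assumption for illustration, not the paper's actual probabilistic model:

    ```python
    # Hedged sketch of probabilistic schema covering as greedy weighted set cover.
    # The score "overlap * probability" is an illustrative assumption.
    def probabilistic_cover(schema_attrs, concepts):
        """concepts: {name: (set_of_attributes, match_probability)}"""
        uncovered, cover = set(schema_attrs), []
        while uncovered:
            # pick the concept covering the most uncovered attributes,
            # weighted by how probable the match is
            best = max(
                concepts,
                key=lambda c: len(concepts[c][0] & uncovered) * concepts[c][1],
            )
            attrs, p = concepts[best]
            if not attrs & uncovered:
                break  # nothing left in the repository covers the remainder
            cover.append((best, p))
            uncovered -= attrs
        return cover

    # Hypothetical concept repository for a customer schema:
    repo = {
        "Address": ({"street", "city", "zip"}, 0.9),
        "Contact": ({"email", "phone"}, 0.8),
        "Geo":     ({"city", "zip"}, 0.6),
    }
    print(probabilistic_cover({"street", "city", "zip", "email"}, repo))
    # -> [('Address', 0.9), ('Contact', 0.8)]
    ```

    Because "Address" covers three uncovered attributes at high probability, it beats the smaller, less certain "Geo" concept, which matches the abstract's point that probabilities steer the cover toward better descriptions.
    
    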

    A Survey on Some Parameters of Beef and Buffalo Meat Quality

    A survey was carried out on 13 Vietnamese Yellow cattle, 14 LaiSind cattle, and 18 buffaloes in Hanoi to estimate the quality of the longissimus dorsi in terms of pH, color, drip loss, cooking loss, and tenderness at 6 different postmortem intervals. The pH value of the longissimus dorsi was not significantly different among the 3 breeds (P>0.05); it fell rapidly during the first 36 hours postmortem and then remained stable, within the range considered normal. Conversely, the color values L*, a*, and b* tended to increase and likewise stabilized at 36 hours postmortem, except for LaiSind cattle at 48 hours. On the L* scale, the meat of Yellow and LaiSind cattle met normal quality, but the buffalo meat was classified as dark cutting. The tenderness of the longissimus dorsi was significantly different among the breeds (P<0.05). The value was highest at 48 hours and then decreased for LaiSind and buffalo, but for Yellow cattle the value decreased continuously after slaughter. In terms of tenderness, buffalo meat and Yellow cattle meat were classified as "intermediate", while LaiSind meat fell outside this interval and was classified as "tough". The drip loss ratio increased with preservation time (P<0.05). The cooking loss ratio was lowest at 12 hours and higher in subsequent periods, but there was no significant difference among the periods after 36 hours postmortem.